
    Hunting active Brownian particles: Learning optimal behavior

    We numerically study active Brownian particles that can respond to environmental cues through a small set of actions (switching their motility and turning left or right with respect to some direction), which are motivated by recent experiments with colloidal self-propelled Janus particles. We employ reinforcement learning to find optimal mappings between the state of particles and these actions. Specifically, we first consider a predator-prey situation in which prey particles try to avoid a predator. Using as reward the squared distance from the predator, we discuss the merits of three state-action sets and show that turning away from the predator is the most successful strategy. We then remove the predator and employ as collective reward the local concentration of signaling molecules exuded by all particles and show that aligning with the concentration gradient leads to chemotactic collapse into a single cluster. Our results illustrate a promising route to obtain local interaction rules and design collective states in active matter.

    Comment: to appear in Phys. Rev.
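    The predator-avoidance setup described in the abstract can be sketched as tabular Q-learning. This is only a minimal illustration, not the paper's implementation: the discretization into 8 bearing bins, the three turning actions, the particle speed, noise level, and all hyperparameters (`ALPHA`, `GAMMA`, `EPS`) are assumed values chosen for the sketch. The reward, as in the abstract, is the squared distance from the predator.

    ```python
    import math
    import random

    # Hedged sketch of tabular Q-learning for predator avoidance.
    # State: predator's bearing relative to the particle's heading, in N_BINS sectors.
    # Actions: turn left, go straight, turn right (turning increments are assumed).
    # Reward: squared distance to the predator (as stated in the abstract).
    N_BINS = 8
    ACTIONS = [-0.3, 0.0, 0.3]   # turning increments in radians (illustrative)
    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration (assumed)
    SPEED = 0.1                  # self-propulsion step length (assumed)

    def bearing_bin(px, py, x, y, theta):
        """Discretize the predator's bearing relative to the particle's heading."""
        rel = math.atan2(py - y, px - x) - theta
        rel = (rel + math.pi) % (2 * math.pi)  # map to [0, 2*pi)
        return int(rel / (2 * math.pi) * N_BINS) % N_BINS

    def train(episodes=200, steps=200, seed=0):
        rng = random.Random(seed)
        Q = [[0.0] * len(ACTIONS) for _ in range(N_BINS)]
        for _ in range(episodes):
            x, y, theta = 0.0, 0.0, rng.uniform(0, 2 * math.pi)
            px, py = rng.uniform(-1, 1), rng.uniform(-1, 1)  # static predator here
            s = bearing_bin(px, py, x, y, theta)
            for _ in range(steps):
                # epsilon-greedy action selection
                if rng.random() < EPS:
                    a = rng.randrange(len(ACTIONS))
                else:
                    a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
                theta += ACTIONS[a] + rng.gauss(0, 0.05)  # turn plus rotational noise
                x += SPEED * math.cos(theta)
                y += SPEED * math.sin(theta)
                r = (x - px) ** 2 + (y - py) ** 2  # squared-distance reward
                s2 = bearing_bin(px, py, x, y, theta)
                Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
                s = s2
        return Q
    ```

    With the reward growing as the particle moves away, the learned table tends to favor the turning action that points the particle away from the predator, consistent with the abstract's finding that turning away is the most successful strategy.
    
    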

    A "Framework" for Object Oriented Frameworks Design
